184 results for core set

in Queensland University of Technology - ePrints Archive


Relevance: 60.00%

Abstract:

Research has led to a proliferation of multi-attribute scales to understand the motives for sport event attendance. The large number of potential motives, coupled with the long questionnaires needed to measure them, creates challenges for sport marketing research in natural populations. This research brings parsimony to the study of sport consumer behaviour by developing and testing a core set of five SportWay facets of motivation. The results provide guidance to sport marketing professionals and academics in survey development decisions related to selecting the most appropriate motives and items.

Relevance: 60.00%

Abstract:

Goals of work: The aim of this secondary data analysis was to investigate symptom clusters over time to inform symptom management of a patient group after commencing adjuvant chemotherapy. Materials and methods: A prospective longitudinal study of 219 cancer outpatients was conducted within 1 month of commencing chemotherapy (T1), and 6 months (T2) and 12 months (T3) later. Patients' distress levels were assessed for 42 physical symptoms on a clinician-modified Rotterdam Symptom Checklist. Symptom clusters were identified in exploratory factor analyses at each time. Symptom inclusion in clusters was determined from structure coefficients, so a symptom could be associated with multiple clusters. Stability over time was determined from symptom cluster composition and the proportion of symptoms in the initial symptom clusters replicated at later times. Main results: Fatigue and daytime sleepiness were the most prevalent distressing symptoms over time. The median number of concurrent distressing symptoms remained approximately 7 over time. Five consistent clusters were identified at T1, T2, and T3. An additional two clusters were identified at 12 months, possibly due to less variation in distress levels. Weakness and fatigue were each associated with two, four, and five symptom clusters at T1, T2, and T3, respectively, potentially suggesting different causal mechanisms. Conclusion: Stability is a necessary attribute of symptom clusters, but definitional clarification is required. We propose that a core set of concurrent symptoms identifies each symptom cluster, signifying a common cause. Additional related symptoms may be included over time. Further longitudinal investigation is required to identify symptom clusters and their underlying causes.
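
The method the abstract describes (exploratory factor analysis, with cluster membership read off structure coefficients) can be sketched compactly. The Python sketch below uses the factor_analyzer package; the toy data, symptom names, factor count and the 0.40 membership cut-off are all illustrative assumptions, not values from the study.

```python
# Hedged sketch: symptom clusters via exploratory factor analysis.
# factor_analyzer exposes a structure matrix for the promax (oblique)
# rotation; the cut-off and simulated ratings below are assumptions.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def symptom_clusters(distress, n_factors=5, cutoff=0.40):
    """Return {factor index: member symptoms} from structure coefficients.

    `distress` has one row per patient, one column per symptom rating.
    A symptom may exceed the cut-off on several factors, matching the
    multi-cluster membership described in the abstract.
    """
    fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
    fa.fit(distress.values)
    structure = fa.structure_  # structure coefficients under promax
    return {j: [s for s, c in zip(distress.columns, structure[:, j])
                if abs(c) >= cutoff]
            for j in range(n_factors)}

# Toy example with simulated 0-3 distress ratings for five symptoms:
rng = np.random.default_rng(0)
symptoms = ["fatigue", "sleepiness", "nausea", "pain", "weakness"]
data = pd.DataFrame(rng.integers(0, 4, size=(219, 5)), columns=symptoms)
print(symptom_clusters(data, n_factors=2))
```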

Relevance: 60.00%

Abstract:

Objective: Radiation safety principles dictate that imaging procedures should minimise the radiation risks involved without compromising diagnostic performance. This study aims to define a core set of views that maximises clinical information yield for minimum radiation risk; angiographers would supplement these views as clinically indicated. Methods: An algorithm was developed to combine published data detailing the quality of information derived for the major coronary artery segments through the use of a common set of views in angiography with data relating to the dose–area product and scatter radiation associated with these views. Results: The optimum view set for the left coronary system comprised four views: left anterior oblique (LAO) with cranial (Cr) tilt, shallow right anterior oblique (AP-RAO) with caudal (Ca) tilt, RAO with Ca tilt and AP-RAO with Cr tilt. For the right coronary system three views were identified: LAO with Cr tilt, RAO and AP-RAO with Cr tilt. An alternative left coronary view set including a left lateral achieved minimally superior efficiency (~5%), but with an ~8% higher radiation dose to the patient and a 40% higher cardiologist dose. Conclusion: This algorithm identifies a core set of angiographic views that optimises the information yield and minimises radiation risk. This basic data set would be supplemented by additional clinically determined views selected by the angiographer for each case. The decision to use additional views for diagnostic angiography and interventions would be assisted by referencing a table of relative radiation doses for the views being considered.
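
The abstract does not spell out the selection algorithm, but its core idea, trading information yield against radiation dose across candidate view sets, can be illustrated with a small exhaustive search. The view names below come from the abstract; the numeric information scores and relative doses are invented placeholders, not the study's published data.

```python
# Illustrative sketch only: exhaustively score view subsets by
# information yield per unit radiation dose. Scores and doses are
# invented placeholders.
from itertools import combinations

# view -> (information score over coronary segments, relative dose)
CANDIDATE_VIEWS = {
    "LAO cranial":    (0.90, 1.4),
    "AP-RAO caudal":  (0.85, 1.0),
    "RAO caudal":     (0.80, 1.1),
    "AP-RAO cranial": (0.75, 0.9),
    "left lateral":   (0.70, 1.8),
}

def best_view_set(n_views):
    """Find the n-view subset maximising information per unit dose."""
    best, best_score = None, float("-inf")
    for subset in combinations(CANDIDATE_VIEWS, n_views):
        info = sum(CANDIDATE_VIEWS[v][0] for v in subset)
        dose = sum(CANDIDATE_VIEWS[v][1] for v in subset)
        if info / dose > best_score:
            best, best_score = subset, info / dose
    return best, best_score

views, score = best_view_set(4)
print(f"optimum 4-view set: {views} (info/dose = {score:.3f})")
```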

Relevance: 60.00%

Abstract:

This paper presents a combined structure for using real-, complex-, and binary-valued vectors for semantic representation. The theory, implementation, and application of this structure are all significant. For the theory underlying quantum interaction, it is important to develop a core set of mathematical operators that describe systems of information, just as core mathematical operators in quantum mechanics are used to describe the behavior of physical systems. The system described in this paper enables us to compare more traditional quantum mechanical models (which use complex state vectors) alongside more generalized quantum models that use real and binary vectors. The implementation of such a system presents fundamental computational challenges: for large and sometimes sparse datasets, the demands on time and space differ for real, complex, and binary vectors. To accommodate these demands, the Semantic Vectors package has been carefully adapted and can now switch between different number types relatively seamlessly. This paper describes the key abstract operations in our semantic vector models and their implementations for real, complex, and binary vectors. We also discuss some of the key questions that arise in the field of quantum interaction and informatics, explaining how the wide availability of modelling options for different number fields will help to investigate some of these questions.
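
The Semantic Vectors package itself is written in Java; the Python sketch below is a hypothetical illustration (not the package's API) of how the same abstract operations, superposition and similarity, can be given per-number-type implementations, shown here for real and binary vectors.

```python
# Hypothetical sketch: one abstract interface (superpose, similarity)
# with different implementations per number field. Complex vectors
# would follow the same pattern.
import numpy as np

class RealVector:
    """Real-valued vector: superpose by addition, compare by cosine."""
    def __init__(self, v):
        self.v = np.asarray(v, dtype=float)
    def superpose(self, other):
        return RealVector(self.v + other.v)
    def similarity(self, other):
        return float(self.v @ other.v /
                     (np.linalg.norm(self.v) * np.linalg.norm(other.v)))

class BinaryVector:
    """Binary vector: superpose by bitwise OR here (full systems use a
    majority vote across many vectors), compare by Hamming similarity."""
    def __init__(self, bits):
        self.bits = np.asarray(bits, dtype=np.uint8)
    def superpose(self, other):
        return BinaryVector(self.bits | other.bits)
    def similarity(self, other):
        return 1.0 - float(np.mean(self.bits != other.bits))

a, b = RealVector([1.0, 0.0, 2.0]), RealVector([1.0, 1.0, 0.0])
print(a.superpose(b).v, a.similarity(b))
p, q = BinaryVector([1, 0, 1, 1]), BinaryVector([1, 1, 0, 1])
print(p.superpose(q).bits, p.similarity(q))
```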

Relevance: 60.00%

Abstract:

Exponential growth of genomic data in the last two decades has made manual analyses impractical for all but trial studies. As genomic analyses have become more sophisticated and move toward comparisons across large datasets, computational approaches have become essential. One of the most important biological questions is to understand the mechanisms underlying gene regulation. Genetic regulation is commonly investigated and modelled through transcriptional regulatory network (TRN) structures, which model the regulatory interactions between two key components: transcription factors (TFs) and the target genes (TGs) they regulate. Transcriptional regulatory networks have proven to be invaluable scientific tools in bioinformatics, and when used in conjunction with comparative genomics they have provided substantial insights into the evolution of regulatory interactions. Current approaches to regulatory network inference, however, omit two additional key entities: promoters and transcription factor binding sites (TFBSs). In this study, we explored the relationships among these regulatory components in bacteria. Our primary goal was to identify relationships that can assist in reducing the high false positive rates associated with transcription factor binding site predictions and thereby enhance the reliability of the inferred transcriptional regulatory networks. In our preliminary exploration of relationships between the key regulatory components in Escherichia coli transcription, we discovered a number of potentially useful features, some of which proved successful in reducing the number of false positives when applied to re-evaluate binding site predictions. The combination of location score and sequence dissimilarity scores increased de novo binding site prediction accuracy by 13.6%. Of chief interest was the relationship observed between promoter strength and transcription factors grouped by their regulatory role. Based on the common assumption that promoter homology positively correlates with transcription rate, we hypothesised that weak promoters are preferentially associated with activator binding sites to enhance gene expression, whilst strong promoters have more repressor binding sites to repress or inhibit gene transcription. The t-tests assessed for E. coli σ70 promoters returned a p-value of 0.072, which at the 0.1 significance level suggested support for our (alternative) hypothesis, albeit this trend may only be present for promoters whose corresponding TFBSs are either all repressors or all activators. Although the observations were specific to σ70, such suggestive results strongly encourage additional investigations when more experimentally confirmed data become available.
Much of the remainder of the thesis concerns a machine learning study of binding site prediction, using the SVM and kernel methods, principally the spectrum kernel. Spectrum kernels have been successfully applied in previous studies of protein classification [91, 92], as well as the related problem of promoter prediction [59], and we have here successfully applied the technique to refining TFBS predictions. The advantages provided by the SVM classifier were best seen in 'moderately' conserved transcription factor binding sites, as represented by our E. coli CRP case study. Inclusion of additional position feature attributes further increased accuracy by 9.1%, but more notable was the considerable decrease in false positive rate from 0.8 to 0.5 while retaining 0.9 sensitivity. Improved prediction of transcription factor binding sites is in turn extremely valuable in improving inference of regulatory relationships, a problem notoriously prone to false positive predictions. Here, the number of false regulatory interactions inferred using the conventional two-component model was substantially reduced when we integrated de novo transcription factor binding site predictions as an additional criterion for acceptance, in a case study of inference in the Fur regulon. This initial work was extended to a comparative study of the iron regulatory system across 20 Yersinia strains. This work revealed interesting, strain-specific differences, especially between pathogenic and non-pathogenic strains. Such differences were made clear through interactive visualisations using the TRNDiff software developed as part of this work, and would have remained undetected using conventional methods. This approach led to the nomination of the Yfe iron-uptake system as a candidate for further wet-lab experimentation, due to its potential active functionality in non-pathogens and its known participation in full virulence of the bubonic plague strain. Building on this work, we introduced novel structures we have labelled 'regulatory trees', inspired by the phylogenetic tree concept. Instead of using gene or protein sequence similarity, the regulatory trees were constructed based on the number of similar regulatory interactions. While common phylogenetic trees convey information regarding changes in gene repertoire, which we might regard as analogous to 'hardware', the regulatory tree informs us of changes in regulatory circuitry, in some respects analogous to 'software'. In this context, we explored the 'pan-regulatory network' for the Fur system: the entire set of regulatory interactions found for the Fur transcription factor across a group of genomes. In the pan-regulatory network, emphasis is placed on how the regulatory network for each target genome is inferred from multiple sources instead of a single source, as is the common approach. The benefit of using multiple reference networks is a more comprehensive survey of the relationships and increased confidence in the regulatory interactions predicted. In the present study, we distinguish between relationships found across the full set of genomes, the 'core-regulatory-set', and interactions found only in a subset of the genomes explored, the 'sub-regulatory-set'. We found nine Fur target gene clusters present across the four genomes studied, this core set potentially identifying basic regulatory processes essential for survival.
Species-level differences are seen at the sub-regulatory-set level; for example, the known virulence factors YbtA and PchR were found in Y. pestis and P. aeruginosa respectively, but were not present in either E. coli or B. subtilis. Such factors, and the iron-uptake systems they regulate, are ideal candidates for wet-lab investigation to determine whether or not they are pathogen-specific. In this study, we employed a broad range of approaches to address our goals and assessed these methods using the Fur regulon as our initial case study. We identified a set of promising feature attributes; demonstrated their success in increasing transcription factor binding site prediction specificity while retaining sensitivity; and showed the importance of binding site predictions in enhancing the reliability of regulatory interaction inferences. Most importantly, these outcomes led to the introduction of a range of visualisations and techniques, which are applicable across the entire bacterial spectrum and can be utilised in studies beyond the understanding of transcriptional regulatory networks.
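
The spectrum kernel central to the thesis's SVM work has a compact form: a linear kernel over k-mer count vectors. The sketch below shows that idea in Python with scikit-learn; the toy sequences, labels and k = 3 are illustrative assumptions, not the thesis's data or code.

```python
# Hedged sketch of spectrum-kernel TFBS classification: represent each
# DNA sequence by its k-mer counts; a linear kernel on these vectors is
# exactly the k-spectrum kernel. Sequences and labels are toy examples.
from itertools import product
import numpy as np
from sklearn.svm import SVC

def spectrum_features(seqs, k=3):
    """Map sequences to k-mer count vectors (the k-spectrum embedding)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    X = np.zeros((len(seqs), len(kmers)))
    for row, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            X[row, index[s[i:i + k]]] += 1
    return X

# Toy data: 1 = plausible binding site, 0 = background sequence.
sites = ["TGTGATCTAGATCACA", "TGTGACGTAGGTCACA"]   # CRP-like, invented
background = ["ACGTACGTACGTACGT", "GGGGCCCCGGGGCCCC"]
X = spectrum_features(sites + background, k=3)
y = np.array([1, 1, 0, 0])

clf = SVC(kernel="linear").fit(X, y)   # linear kernel = spectrum kernel
print(clf.predict(spectrum_features(["TGTGATCGAGATCACA"], k=3)))
```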

Relevance: 60.00%

Abstract:

The effectiveness of any trapping system is highly dependent on the ability to accurately identify the specimens collected. For many fruit fly species, accurate identification (= diagnostics) using morphological or molecular techniques is relatively straightforward and poses few technical challenges. However, nearly all genera of pest tephritids also contain groups of species where single, stand-alone tools are not sufficient for accurate identification: such groups include the Bactrocera dorsalis complex, the Anastrepha fraterculus complex and the Ceratitis FAR complex. Misidentification of high-impact species from such groups can have dramatic consequences and negate the benefits of an otherwise effective trapping program. To help prevent such problems, this chapter defines what is meant by a species complex and describes in detail how the correct identification of species within a complex requires the use of an integrative taxonomic approach. Integrative taxonomy uses multiple, independent lines of evidence to delimit species boundaries, and the underpinnings of this approach from both the theoretical speciation literature and the systematics/taxonomy literature are described. The strength of the integrative approach lies in the explicit testing of hypotheses and the use of multiple, independent species delimitation tools. A case is made for a core set of species delimitation tools (pre- and post-zygotic compatibility tests, multi-locus phylogenetic analysis, chemoecological studies, and morphometric and geometric morphometric analyses) to be adopted as standards by tephritologists aiming to resolve economically important species complexes. In discussing the integrative approach, emphasis is placed on the subtle but important differences between integrative and iterative taxonomy. The chapter finishes with a case study that illustrates how iterative taxonomy applied to the B. dorsalis species complex led to incorrect taxonomic conclusions, which has had major implications for quarantine, trade, and horticultural pest management. In contrast, an integrative approach to the problem has resolved species limits in this taxonomically difficult group, meaning that robust diagnostics are now available.

Relevance: 30.00%

Abstract:

The Node-based Local Mesh Generation (NLMG) algorithm, which is free of mesh inconsistency, is one of the core algorithms in the Node-based Local Finite Element Method (NLFEM), achieving a seamless link between mesh generation and stiffness matrix calculation; this seamless link helps to improve the parallel efficiency of FEM. The key to ensuring the efficiency and reliability of NLMG is to determine the candidate satellite-node set of a central node quickly and accurately. This paper develops a Fast Local Search Method based on Uniform Bucket (FLSMUB) and a Fast Local Search Method based on Multilayer Bucket (FLSMMB), and applies them successfully to this decisive problem, i.e. determining the candidate satellite-node set of any central node in the NLMG algorithm. Using FLSMUB or FLSMMB, the NLMG algorithm becomes a practical tool for reducing the parallel computation cost of FEM. Parallel numerical experiments validate that both FLSMUB and FLSMMB are fast, reliable and efficient for their respective problem types, and that they are especially effective on large-scale parallel problems.
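
The abstract does not reproduce the bucket methods, but the uniform-bucket idea can be sketched: hash node coordinates into a regular grid so the candidate satellite-node set of a central node is gathered from a constant number of nearby buckets rather than by scanning all nodes. Everything below (2-D points, cell size, search radius) is an illustrative assumption in the spirit of FLSMUB, not the authors' implementation.

```python
# Hedged sketch of a uniform-bucket local search. A 3x3 bucket scan is
# complete only when radius <= cell size, as assumed here.
from collections import defaultdict

def build_buckets(points, cell):
    """Hash 2-D node coordinates into a uniform grid of buckets."""
    buckets = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        buckets[(int(x // cell), int(y // cell))].append(idx)
    return buckets

def candidate_satellites(points, buckets, cell, center, radius):
    """Candidate satellite nodes of `center`: nodes within `radius`,
    found by scanning only the 3x3 block of buckets around it."""
    cx, cy = points[center]
    bx, by = int(cx // cell), int(cy // cell)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for idx in buckets.get((bx + dx, by + dy), ()):
                px, py = points[idx]
                if idx != center and (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2:
                    out.append(idx)
    return out

pts = [(0.2, 0.3), (0.4, 0.1), (0.9, 0.8), (0.25, 0.35), (2.0, 2.0)]
cell = 0.5
buckets = build_buckets(pts, cell)
print(candidate_satellites(pts, buckets, cell, center=0, radius=0.3))
```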

Relevance: 30.00%

Abstract:

Prescribing errors remain a significant cause of patient harm. Safe prescribing is not just about writing a prescription: it involves many cognitive and decision-making steps. A set of national prescribing competencies for all prescribers (including non-medical prescribers) is needed to guide education and training curricula and the assessment and credentialing of individual practitioners. We have identified 12 core competencies for safe prescribing, which embody the four stages of the prescribing process: information gathering, clinical decision making, communication, and monitoring and review. These core competencies, along with their learning objectives and assessment methods, provide a useful starting point for teaching safe and effective prescribing.

Relevance: 30.00%

Abstract:

The Remote Sensing Core Curriculum (RSCC) was initiated in 1993 to meet the demand for a college-level set of resources to enhance the quality of education across national and international campuses. The American Society for Photogrammetry and Remote Sensing adopted the RSCC in 1996 to sustain support of this educational initiative for its membership and collegiate community. A series of volumes, containing lectures, exercises, and data, is being created by expert contributors to address the different technical fields of remote sensing. The RSCC program is designed to operate on the Internet, taking full advantage of World Wide Web (WWW) technology for distance learning. The issues of curriculum development related to the educational setting, with demands on faculty, students, and facilities, are considered in order to understand the new paradigms of WWW-influenced computer-aided learning. The WWW is shown to be especially appropriate for facilitating remote sensing education, with its requirements for addressing image data sets and multimedia learning tools. The RSCC is located at http://www.umbc.edu/rscc.

Relevance: 30.00%

Abstract:

A variety of sustainable development research efforts and related activities are attempting to reconcile the conservation of our natural resources with economic motivation, while also improving social equity and quality of life. Land use/land cover change, occurring on a global scale, is an aggregate of local land use decisions and profoundly impacts our environment. It is therefore the local decision-making process that should be the eventual target of many of the ongoing data collection and research efforts that strive toward supporting a sustainable future. Satellite imagery is a primary source of data upon which to build a core data set for use by researchers in analyzing this global change. A process is necessary to link global change research, utilizing satellite imagery, to the local land use decision-making process. One example of this is the NASA-sponsored Regional Data Center (RDC) prototype. The RDC approach is an attempt to integrate science and technology at the community level. The anticipated result of this complex interaction between the research and decision-making communities will be realized in the form of long-term benefits to the public.

Relevance: 30.00%

Abstract:

We identify relation completion (RC) as a recurring problem that is central to the success of novel big data applications such as Entity Reconstruction and Data Enrichment. Given a semantic relation, RC attempts to link entity pairs between two entity lists under the relation. To accomplish the RC goals, we propose to formulate search queries for each query entity α based on some auxiliary information, so as to detect its target entity β from the set of retrieved documents. For instance, a pattern-based method (PaRE) uses extracted patterns as the auxiliary information in formulating search queries. However, high-quality patterns may decrease the probability of finding suitable target entities. As an alternative, we propose the CoRE method, which uses context terms learned from the text surrounding the expression of a relation as the auxiliary information in formulating queries. Experimental results based on several real-world web data collections demonstrate that CoRE achieves much higher accuracy than PaRE for the purpose of RC.
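
The context-term idea behind CoRE can be sketched in a few lines: mine frequent terms surrounding known entity pairs that express the relation, then use the top terms as auxiliary information when formulating a search query for a new query entity α. The sentences, entity pairs and term budget below are invented placeholders, not the authors' implementation or data.

```python
# Hedged sketch of CoRE-style query formulation with learned context terms.
from collections import Counter

def learn_context_terms(sentences, pairs, top_n=3):
    """Count terms co-occurring with known (alpha, beta) pairs; keep top N."""
    counts = Counter()
    for s in sentences:
        tokens = s.lower().split()
        for a, b in pairs:
            if a.lower() in tokens and b.lower() in tokens:
                counts.update(t for t in tokens
                              if t not in (a.lower(), b.lower()))
    return [t for t, _ in counts.most_common(top_n)]

def formulate_query(alpha, context_terms):
    """Build a search query for query entity alpha, e.g. for a web search API."""
    return f'"{alpha}" ' + " ".join(context_terms)

sentences = ["Canberra is the capital of Australia",
             "Ottawa is the capital of Canada"]
pairs = [("Australia", "Canberra"), ("Canada", "Ottawa")]
terms = learn_context_terms(sentences, pairs)
print(formulate_query("France", terms))   # -> '"France" is the capital'
```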

Relevance: 30.00%

Abstract:

There has been a paucity of published research on the temporal aspect of destination image change. Given increasing investments in destination branding, research is needed to enhance understanding of how to monitor destination brand performance, of which destination image is the core construct, over time. This article reports the results of four studies tracking the brand performance of a competitive set of five destinations between 2003 and 2012. Results indicate minimal changes in perceptions of the five destinations of interest over the 10 years, supporting the assertion of Gartner (1986) and Gartner and Hunt (1987) that destination image change occurs only slowly over time. While undertaken in Australia, the research approach provides DMOs in other parts of the world with a practical tool for evaluating brand performance over time, both as a measure of the effectiveness of past marketing communications and as an indicator of future performance.

Relevance: 30.00%

Abstract:

Boards of directors have legal and ethical responsibilities to be competent. Yet, in a world where business models and whole sectors are being disrupted by rapid information and technology change, a majority of directors lack IT governance knowledge and skills. Limited individual IT competency and limited collective board Enterprise Technology Governance capability constitute a global problem. Without this capability, boards are potentially flying blind: risk is increased and opportunities to lead and govern digital transformation are lost. To address this capability gap, this research provides the first multi-industry validated Enterprise Technology Governance competency set for use in board evaluation, recruitment and professional development.

Relevance: 30.00%

Abstract:

With the level of digital disruption affecting businesses around the globe, you might expect high levels of Governance of Enterprise Information and Technology (GEIT) capability within boards. Boards and their senior executives know technology is important: more than 90% of boards and senior executives currently identify technology as essential to their current businesses and to their organization's future. But as few as 16% have sufficient GEIT capability. The Global Centre for Digital Business Transformation's recent research contains strong indicators of the need for change. Despite board awareness of both the likelihood and impact of digital disruption, things digital are still not viewed as a board-level matter in 45% of companies. And it's not just the board: the lack of board attention to technology can be mirrored at senior executive level as well. When asked about their organization's attitude towards digital disruption, 43% of executives said their business either did not recognise it as a priority or was not responding appropriately. A further 32% were taking a "follower" approach, a potentially risky move as we will explain. Given all the evidence that boards know information and technology (I&T) is vital, and that they understand the inevitability, impact and speed of digital change and disruption, why are so many boards dragging their heels? Ignoring I&T disruption and refusing to build capability at board level is nothing short of negligence. Too many boards risk flying blind without GEIT capability [2]. To help build decision quality and I&T governance capability, this research:
• Confirms a pressing need to build individual competency and cumulative, across-board capability in governing I&T
• Identifies six factors that have rapidly increased the need, risk and urgency
• Finds that boards may risk not meeting their duty-of-care responsibilities when it comes to I&T oversight
• Highlights barriers to building capability
• Details three GEIT competencies that boards and executives can use for evaluation, selection, recruitment and professional development.